Outrage as Google scraps its promise not to use AI for weapons or surveillance
Google has updated its AI ethics guidelines and removed a key pledge not to use the technology in a dangerous way. The company erased the 2018 pledge on Tuesday, which stated the tech giant 'would not use AI for weapons or surveillance'. The revised policy now says only that Google will develop AI 'responsibly' and in line with 'widely accepted principles of international law and human rights.' Google's change has sparked internal backlash, with employees calling the move 'deeply concerning' and saying the company should not be involved in 'the business of war.' Matt Mahmoudi, Amnesty adviser on AI and human rights, shamed Google for the move, saying the tech giant set a 'dangerous precedent.' 'AI-powered technologies could fuel surveillance and lethal killing systems at a vast scale, potentially leading to mass violations and infringing on the fundamental right to privacy,' he added.
- Asia > Middle East > Israel (0.08)
- North America > United States > Pennsylvania (0.05)
- Asia > Middle East > Palestine > Gaza Strip > Gaza Governorate > Gaza (0.05)
- Law > International Law (0.59)
- Government > Military (0.57)
- Government > Regional Government > North America Government > United States Government (0.36)
Google now thinks it's OK to use AI for weapons and surveillance
Google has made one of the most substantive changes to its AI principles since first publishing them in 2018. In a change spotted by The Washington Post, the search giant edited the document to remove pledges it had made promising it would not "design or deploy" AI tools for use in weapons or surveillance technology. Previously, those guidelines included a section titled "applications we will not pursue," which is not present in the current version of the document. Instead, there's now a section titled "responsible development and deployment." There, Google says it will implement "appropriate human oversight, due diligence, and feedback mechanisms to align with user goals, social responsibility, and widely accepted principles of international law and human rights." That's a far broader commitment than the specific ones the company made as recently as the end of last month when the prior version of its AI principles was still live on its website.
- Information Technology > Services (0.97)
- Law > International Law (0.61)
Google Lifts a Ban on Using Its AI for Weapons and Surveillance
Google announced Tuesday that it is overhauling the principles governing how it uses artificial intelligence and other advanced technology. The company removed language promising not to pursue "technologies that cause or are likely to cause overall harm," "weapons or other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people," "technologies that gather or use information for surveillance violating internationally accepted norms," and "technologies whose purpose contravenes widely accepted principles of international law and human rights." The changes were disclosed in a note appended to the top of a 2018 blog post unveiling the guidelines. "We've made updates to our AI Principles. Visit AI.Google for the latest," the note reads.
- Law > International Law (0.63)
- Government > Regional Government > North America Government > United States Government (0.32)
Google Tackles AI Principles: Is It Enough?
Google has released its manifesto of principles guiding its efforts in the artificial intelligence realm – though some say the effort isn't as complete as it could be. AI is the new golden ring for developers, thanks to its potential not just to automate functions at scale but also to make contextual decisions based on what it learns over time. This learning capability has the capacity to bring immense good: weeding out cyber-threats before they happen, offering smarter recommendations to consumers, improving algorithms, even tracking wildfire risk and monitoring the environments of endangered species – or, on the back end, speeding along manufacturing processes and evaluating open-source code for potential flaws. What we don't want, of course, is a Matrix-y, Skynet-y, self-aware network interested in, say, enslaving humans. Google is looking to thread this needle with its latest weigh-in on the AI front, its principles for guiding AI development.
- Law (0.95)
- Information Technology > Security & Privacy (0.90)